# 8B Parameter Scale
## DeepSeek-R1-0528-Qwen3-8B AWQ 4-bit

- License: MIT
- Author: hxac
- Tags: Large Language Model, Transformers
- Downloads: 179 · Likes: 2

A 4-bit AWQ-quantized build of DeepSeek-R1-0528-Qwen3-8B, suited to memory-efficient inference on constrained hardware (a loading sketch follows below).
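
As orientation for this kind of checkpoint, here is a minimal sketch of serving an AWQ 4-bit model with vLLM. The repo id is an assumption based on the listing's author and model name, not a confirmed path.

```python
# Minimal sketch: serving an AWQ 4-bit checkpoint with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(
    model="hxac/DeepSeek-R1-0528-Qwen3-8B-AWQ",  # hypothetical repo id from the listing
    quantization="awq",  # load the 4-bit AWQ weights
)
params = SamplingParams(temperature=0.6, max_tokens=256)
outputs = llm.generate(["Summarize AWQ quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```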
## Qwen3-8B GRPO MedMCQA

- Author: mlxha
- Tags: Large Language Model, Transformers
- Downloads: 84 · Likes: 1

A fine-tune of Qwen/Qwen3-8B trained on the medmcqa-grpo dataset, specialized for medical multiple-choice question answering (see the prompting sketch below).
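
For context on the task format, here is a minimal sketch of prompting such a fine-tune on a MedMCQA-style question with transformers. The repo id is assumed from the listing and may differ.

```python
# Minimal sketch: one MedMCQA-style multiple-choice query.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlxha/Qwen3-8B-GRPO-medmcqa"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = (
    "Which vitamin deficiency causes scurvy?\n"
    "A) Vitamin A  B) Vitamin B12  C) Vitamin C  D) Vitamin D\n"
    "Answer:"
)
inputs = tokenizer(question, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```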
## Fibonacci-1-EN-8b-Chat.P1_5

- License: MIT
- Author: fibonacciai
- Tags: Large Language Model, Supports Multiple Languages
- Downloads: 132 · Likes: 11

Fibonacci-1-EN-8b-Chat.P1_5 is a large language model based on the LLaMA architecture, with 8.03 billion parameters, optimized for natural language processing and text-dialogue tasks.
## Tanuki-8B-dpo-v1.0

- License: Apache-2.0
- Author: weblab-GENIAC
- Tags: Large Language Model, Transformers, Supports Multiple Languages
- Downloads: 1,143 · Likes: 41

Tanuki-8B is an 8B-parameter Japanese large language model developed by the GENIAC Matsuo Lab and optimized for dialogue through SFT and DPO (a chat-template sketch follows below).
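
Because the model is dialogue-tuned, the tokenizer's chat template is the natural entry point. A minimal sketch follows; the repo id is inferred from the author and model name and may differ.

```python
# Minimal sketch: one Japanese chat turn via the chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "weblab-GENIAC/Tanuki-8B-dpo-v1.0"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "こんにちは。自己紹介をしてください。"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```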
## Orca Mini v6 8B

- Author: pankajmathur
- Tags: Large Language Model, Transformers, English
- Downloads: 21 · Likes: 2

A Llama 3 8B language model trained on various SFT datasets, supporting text-generation tasks.
## Llama-3-Chinese-8B-Instruct-v2

- License: Apache-2.0
- Author: hfl
- Tags: Large Language Model, Transformers, Supports Multiple Languages
- Downloads: 49 · Likes: 39

A Chinese instruction-tuned model fine-tuned from Meta-Llama-3-8B-Instruct, suited to dialogue, Q&A, and similar scenarios.